POSITIVE DEFINITE DISTRIBUTIONS AND SUBSPACES OF L−p WITH APPLICATIONS TO STABLE PROCESSES
Authors
Abstract
We define the embedding of an n-dimensional normed space into L−p, 0 < p < n, by extending analytically with respect to p the corresponding property of the classical Lp-spaces. The well-known connection between embeddings into Lp and positive definite functions is extended to the case of negative p by showing that a normed space embeds in L−p if and only if ‖x‖^{−p} is a positive definite distribution. Using this criterion, we generalize the recent solutions of Schoenberg's problems of 1938 by proving that the spaces l^n_q, 2 < q ≤ ∞, embed in L−p if and only if p ∈ [n−3, n). We show that the technique of embedding in L−p can be applied to stable processes in some situations where standard methods do not work. As an example, we prove inequalities of correlation type for the expectations of norms of stable vectors. In particular, for every p ∈ [n−3, n), E((max_{i=1,...,n} |X_i|)^{−p}) ≥ E((max_{i=1,...,n} |Y_i|)^{−p}), where X_1, ..., X_n and Y_1, ..., Y_n are jointly q-stable symmetric random variables, 0 < q ≤ 2, such that, for some k ∈ N, 1 ≤ k < n, the vectors (X_1, ..., X_k) and (X_{k+1}, ..., X_n) have the same distributions as (Y_1, ..., Y_k) and (Y_{k+1}, ..., Y_n), respectively, but Y_i and Y_j are independent for every choice of 1 ≤ i ≤ k, k + 1 ≤ j ≤ n.
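To make the last inequality concrete, here is a minimal Monte Carlo sketch for the simplest stable case q = 2 (jointly Gaussian vectors), with n = 2 and k = 1, so any exponent p in (0, 2) ⊂ [n−3, n) is admissible. The correlation value rho = 0.8, the choice p = 0.5, the sample size, and the use of NumPy are illustrative assumptions made for this sketch only; they are not taken from the paper.

```python
# Illustrative Monte Carlo check of the correlation-type inequality, in the
# Gaussian case q = 2 with n = 2, k = 1.  All parameter choices (rho, p,
# sample size) are assumptions made for this sketch only.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
p = 0.5          # exponent; must lie in [n - 3, n) and in (0, n); here n = 2
rho = 0.8        # hypothetical correlation between the two blocks of X

# X = (X1, X2): jointly Gaussian (2-stable) with correlated blocks
cov_x = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov_x, size=n_samples)

# Y = (Y1, Y2): same standard normal marginals, but the blocks are independent
Y = rng.standard_normal((n_samples, 2))

# Monte Carlo estimates of E((max_i |X_i|)^(-p)) and E((max_i |Y_i|)^(-p))
lhs = np.mean(np.max(np.abs(X), axis=1) ** (-p))
rhs = np.mean(np.max(np.abs(Y), axis=1) ** (-p))
print(f"E(max|X_i|)^(-p) = {lhs:.4f}  >=  E(max|Y_i|)^(-p) = {rhs:.4f}")
```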
Similar works
Kernels, Associated Structures and Generalizations
This paper gives a survey of results in the mathematical literature on positive definite kernels and their associated structures. We concentrate on properties which seem potentially relevant for Machine Learning and try to clarify some results that have been misused in the literature. Moreover, we consider different lines of generalization of positive definite kernels. Namely, we deal with opera...
NORGES TEKNISK-NATURVITENSKAPELIGE UNIVERSITET Parameter Estimation in High Dimensional Gaussian Distributions
In order to compute the log-likelihood for high dimensional Gaussian models, it is necessary to compute the determinant of the large, sparse, symmetric positive definite precision matrix. Traditional methods for evaluating the log-likelihood, which are typically based on Cholesky factorisations, are not feasible for very large models due to the massive memory requirements. We present a novel ap...
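As a point of reference for the "traditional methods" mentioned above, a minimal sketch of the Gaussian log-likelihood computed through a Cholesky factorisation of the precision matrix is given below. It uses a small dense matrix with NumPy purely for illustration; the cited report concerns large sparse precision matrices, for which exactly this kind of factorisation becomes infeasible, and its proposed method is not reproduced here.

```python
# Baseline illustration only: Gaussian log-likelihood via a (dense) Cholesky
# factorisation of the precision matrix Q.  The cited report addresses the
# large sparse case, where this approach runs out of memory.
import numpy as np

def gaussian_loglik(x, Q):
    """log N(x | 0, Q^{-1}) for a symmetric positive definite precision Q."""
    n = Q.shape[0]
    L = np.linalg.cholesky(Q)                  # Q = L @ L.T
    logdet_Q = 2.0 * np.sum(np.log(np.diag(L)))
    quad = x @ Q @ x                           # x^T Q x
    return 0.5 * (logdet_Q - quad - n * np.log(2.0 * np.pi))

# Small synthetic example: tridiagonal precision, as in a 1-D Markov model
n = 5
Q = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
x = np.zeros(n)
print(gaussian_loglik(x, Q))
```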
Image Classification via Sparse Representation and Subspace Alignment
Image representation is a crucial problem in image processing, where there exist many low-level representations of an image, e.g., SIFT, HOG and so on. But there is a missing link between low-level and high-level semantic representations. In fact, traditional machine learning approaches, e.g., non-negative matrix factorization, sparse representation and principal component analysis, are employed to d...
Horizontal Dimensionality Reduction and Iterated Frame Bundle Development
In Euclidean vector spaces, dimensionality reduction can be centered at the data mean. In contrast, on curved manifolds, distances do not split into orthogonal components, and centered analysis distorts inter-point distances in the presence of curvature. In this paper, we define a dimensionality reduction procedure for data in Riemannian manifolds that moves the analysis from a center point to local distance measure...
The Canonical Decomposition of Bivariate Distributions
The ordinary notion of a bivariate distribution has a natural generalisation. For this generalisation it is shown that a bivariate distribution can be characterised by a Hilbert space ℋ and a family ℋ_p, 0 < p < 1, of subspaces of ℋ. ℋ specifies the marginal distributions whilst ℋ_p is a summary of the dependence structure. This characterisation extends existing ideas on canonical correlation.